Topological and Dynamical Complexity of Random Neural Networks
Random neural networks are dynamical descriptions of randomly interconnected
neural units. These systems exhibit a phase transition to chaos as a disorder parameter is
increased. The microscopic mechanisms underlying this phase transition are
unknown and, as in spin glasses, are expected to be fundamentally related to the
behavior of the system. In this Letter we investigate the explosion of
complexity arising near that phase transition. We show that the mean number of
equilibria undergoes a sharp transition from one equilibrium to a very large
number scaling exponentially with the dimension of the system. Near
criticality, we compute the exponential rate of divergence, called topological
complexity. Strikingly, we show that it behaves exactly as the maximal Lyapunov
exponent, a classical measure of dynamical complexity. This relationship
unravels a microscopic mechanism leading to chaos which we further demonstrate
on a simpler class of disordered systems, suggesting a deep and underexplored
link between topological and dynamical complexity.
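A minimal numerical sketch of the transition (illustrative only, not taken from the Letter; all parameter choices are assumptions): it estimates the maximal Lyapunov exponent of the classical random rate model dx/dt = -x + J tanh(x) with i.i.d. couplings J_ij ~ N(0, g^2/N), where chaos sets in near the disorder value g = 1.

    import numpy as np

    def max_lyapunov(g, N=200, T=200.0, dt=0.01, seed=0):
        """Estimate the maximal Lyapunov exponent of dx/dt = -x + J tanh(x),
        J_ij ~ N(0, g^2/N), by evolving and renormalizing a tangent vector."""
        rng = np.random.default_rng(seed)
        J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
        x = rng.normal(size=N)
        v = rng.normal(size=N)
        v /= np.linalg.norm(v)
        log_growth = 0.0
        steps = int(T / dt)
        for _ in range(steps):
            phi_prime = 1.0 - np.tanh(x) ** 2           # tanh'(x)
            x = x + dt * (-x + J @ np.tanh(x))          # Euler step, trajectory
            v = v + dt * (-v + J @ (phi_prime * v))     # same step, linearized flow
            norm = np.linalg.norm(v)
            log_growth += np.log(norm)
            v /= norm
        return log_growth / (steps * dt)

    # Negative below the transition, positive (chaos) above it.
    for g in (0.5, 1.5):
        print(f"g = {g}: lambda_max ~ {max_lyapunov(g):.3f}")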
Regular graphs maximize the variability of random neural networks
In this work we study the dynamics of systems composed of numerous
interacting elements interconnected through a random weighted directed graph,
such as models of random neural networks. We develop an original theoretical
approach based on a combination of a classical mean-field theory originally
developed in the context of dynamical spin-glass models, and the heterogeneous
mean-field theory developed to study epidemic propagation on graphs. Our main
result is that, surprisingly, increasing the variance of the in-degree
distribution does not produce more variable dynamical behavior; on the
contrary, the most variable behavior is obtained in the regular graph
setting. We further study how the dynamical complexity of the attractors is
influenced by the statistical properties of the in-degree distribution.
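The setting can be mimicked with a short simulation (a sketch under assumed parameters; the graph construction below is not the authors' exact model), contrasting a regular in-degree graph with a heterogeneous Poisson one:

    import numpy as np

    def random_weighted_graph(in_degrees, sigma, seed=2):
        """Directed graph where node i receives in_degrees[i] edges chosen at
        random, each with an i.i.d. Gaussian weight of scale sigma/sqrt(mean k)."""
        rng = np.random.default_rng(seed)
        N = len(in_degrees)
        W = np.zeros((N, N))
        scale = sigma / np.sqrt(np.mean(in_degrees))
        for i, k in enumerate(in_degrees):
            sources = rng.choice(N, size=k, replace=False)
            W[i, sources] = rng.normal(0.0, scale, size=k)
        return W

    def simulate(W, T=100.0, dt=0.05, seed=1):
        """Euler-integrate dx/dt = -x + W tanh(x); return the trajectory."""
        rng = np.random.default_rng(seed)
        x = rng.normal(size=W.shape[0])
        traj = []
        for _ in range(int(T / dt)):
            x = x + dt * (-x + W @ np.tanh(x))
            traj.append(x.copy())
        return np.array(traj)

    N, k, sigma = 500, 20, 2.0
    regular = np.full(N, k)                          # every in-degree equal to k
    hetero = np.clip(np.random.default_rng(3).poisson(k, N), 1, N - 1)

    for name, degs in (("regular", regular), ("heterogeneous", hetero)):
        traj = simulate(random_weighted_graph(degs, sigma))
        print(name, "late-time variance:", traj[len(traj) // 2:].var())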
Distributed synaptic weights in a LIF neural network and learning rules
Leaky integrate-and-fire (LIF) models are mean-field limits, valid for a large
number of neurons, used to describe neural networks. We consider inhomogeneous
networks structured by a connectivity parameter (the strengths of the synaptic
weights), whose effect is to process the input current with different
intensities. We first study the properties of the network activity depending on
the distribution of synaptic weights and in particular its discrimination
capacity. Then, we consider simple learning rules and determine the synaptic
weight distribution they generate. We outline the role of noise as a selection
principle and the capacity to memorize a learned signal.
Comment: Physica D: Nonlinear Phenomena, Elsevier, 201
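To illustrate the basic mechanism (a toy sketch; the dynamics and parameters are assumptions, not the paper's mean-field model), a population of independent LIF neurons tau dv/dt = -v + w I, each with its own synaptic weight w, maps one input current to a whole profile of firing rates:

    import numpy as np

    def lif_rates(weights, I_ext, T=2.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
        """Firing rates (spikes/s) of independent LIF neurons following
        tau dv/dt = -v + w * I_ext, one neuron per weight w, reset at threshold."""
        v = np.zeros_like(weights)
        spikes = np.zeros_like(weights)
        for _ in range(int(T / dt)):
            v += dt / tau * (-v + weights * I_ext)
            fired = v >= v_th
            spikes[fired] += 1
            v[fired] = v_reset
        return spikes / T

    rng = np.random.default_rng(0)
    w = rng.gamma(shape=4.0, scale=0.5, size=1000)   # distributed synaptic weights
    for I in (1.0, 1.5):
        print(f"input {I}: mean rate {lif_rates(w, I).mean():.1f} Hz")

Since each neuron scales the common current by its own weight, the rate profile across the weight distribution shifts with the input level, which is one way to read the discrimination capacity studied in the paper.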
Multiscale analysis of slow-fast neuronal learning models with noise
This paper deals with the application of temporal averaging methods to recurrent networks of noisy neurons undergoing a slow and unsupervised modification of their connectivity matrix called learning. Three time-scales arise for these models: (i) the fast neuronal dynamics, (ii) the intermediate external input to the system, and (iii) the slow learning mechanisms. Based on this time-scale separation, we apply an extension of the mathematical theory of stochastic averaging with periodic forcing in order to derive a reduced deterministic model for the connectivity dynamics. We focus on a class of models where the activity is linear, to understand the specificity of several learning rules (Hebbian, trace, or anti-symmetric learning). In a weakly connected regime, we study the equilibrium connectivity, which gathers the entire 'knowledge' of the network about the inputs. We develop an asymptotic method to approximate this equilibrium. We show that the symmetric part of the connectivity post-learning encodes the correlation structure of the inputs, whereas the anti-symmetric part corresponds to the cross-correlation between the inputs and their time derivative. Moreover, the time-scale ratio appears as an important parameter revealing temporal correlations.
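A toy realization of the slow-fast structure (linear noisy activity plus slow Hebbian learning with decay; all constants are illustrative and this is not the paper's averaging derivation), showing the learned connectivity approaching the correlation structure of the inputs:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 3
    tau_x, tau_w = 0.01, 10.0        # fast activity vs. slow learning (tau_w >> tau_x)
    dt, T, sigma = 1e-3, 200.0, 0.05
    freqs = np.array([1.0, 2.0, 3.0])    # three periodic scalar inputs

    W = np.zeros((N, N))
    x = np.zeros(N)
    t = 0.0
    for _ in range(int(T / dt)):
        u = np.sin(2 * np.pi * freqs * t)
        # fast linear noisy activity: dx = (-x + W x + u) dt/tau_x + sigma dB
        x += dt / tau_x * (-x + W @ x + u) + sigma * np.sqrt(dt) * rng.normal(size=N)
        # slow Hebbian learning with decay: dW = (-W + x x^T) dt/tau_w
        W += dt / tau_w * (-W + np.outer(x, x))
        t += dt

    # For the Hebbian rule the equilibrium connectivity is symmetric and, in the
    # weakly connected regime, tracks the correlation structure of the inputs;
    # inputs at different frequencies are uncorrelated, so W ends up near-diagonal.
    print(np.round(W, 3))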
Relative entropy minimizing noisy non-linear neural network to approximate stochastic processes
A method is provided for designing and training noise-driven recurrent neural
networks as models of stochastic processes. The method unifies and generalizes
two previously separate modeling approaches, Echo State Networks (ESN) and Linear
Inverse Modeling (LIM), under the common principle of relative entropy
minimization. The power of the new method is demonstrated on a stochastic
approximation of the El Niño phenomenon studied in climate research.
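The relative entropy minimization itself is not reproduced here; as a reference point, the sketch below builds the standard ESN ingredient (a noise-driven reservoir with a ridge-regression readout, a common stand-in for the fitting step) on a toy AR(1) surrogate signal. All names and parameters are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy target: a scalar AR(1) process standing in for a climate index.
    T = 5000
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = 0.95 * y[t - 1] + 0.1 * rng.normal()

    # Noise-driven echo-state reservoir forced by the signal.
    N, rho = 300, 0.9
    W = rng.normal(0, 1 / np.sqrt(N), (N, N))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # rescale spectral radius
    w_in = rng.normal(0, 1, N)

    X = np.zeros((T, N))
    r = np.zeros(N)
    for t in range(T - 1):
        r = np.tanh(W @ r + w_in * y[t] + 0.01 * rng.normal(size=N))
        X[t + 1] = r        # state after seeing y[t] predicts y[t+1]

    # Ridge-regression readout (the usual ESN fit; the paper replaces this
    # step with relative entropy minimization).
    lam = 1e-2
    w_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)
    print("one-step train MSE:", np.mean((X @ w_out - y) ** 2))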